Learning Latent Representations for Speech Generation and Transformation
The ability to model a generative process and learn latent representations
for speech in an unsupervised fashion will be crucial for processing vast
quantities of unlabelled speech data. Recently, deep probabilistic generative
models such as Variational Autoencoders (VAEs) have achieved tremendous success
in modeling natural images. In this paper, we apply a convolutional VAE to
model the generative process of natural speech. We derive latent space
arithmetic operations to disentangle learned latent representations. We
demonstrate the capability of our model to modify the phonetic content or the
speaker identity for speech segments using the derived operations, without the
need for parallel supervisory data. Comment: Accepted to Interspeech 2017
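As a rough illustration of the latent-space arithmetic described above, the sketch below shifts a segment's latent code along an attribute direction estimated from group means (e.g., two speakers). The `encode`/`decode` callables and the group-mean construction are assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

def modify_attribute(encode, decode, segment, source_segments, target_segments):
    """Shift a segment's latent code along an attribute direction.

    The direction is estimated as the difference between the mean latent
    codes of two groups of segments (e.g., two speakers, or two phones),
    so adding it to z moves the segment toward the target attribute.
    """
    z = encode(segment)  # hypothetical trained VAE encoder
    z_src = np.mean([encode(s) for s in source_segments], axis=0)
    z_tgt = np.mean([encode(s) for s in target_segments], axis=0)
    return decode(z + (z_tgt - z_src))  # hypothetical trained VAE decoder
```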
An Unsupervised Autoregressive Model for Speech Representation Learning
This paper proposes a novel unsupervised autoregressive neural model for
learning generic speech representations. In contrast to other speech
representation learning methods that aim to remove noise or speaker
variabilities, ours is designed to preserve information for a wide range of
downstream tasks. In addition, the proposed model does not require any phonetic
or word boundary labels, allowing the model to benefit from large quantities of
unlabeled data. Speech representations learned by our model significantly
improve performance on both phone classification and speaker verification over
the surface features and other supervised and unsupervised approaches. Further
analysis shows that different levels of speech information are captured by our
model at different layers. In particular, the lower layers tend to be more
discriminative for speakers, while the upper layers provide more phonetic
content. Comment: Accepted to Interspeech 2019. Code available at:
https://github.com/iamyuanchung/Autoregressive-Predictive-Coding
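The sketch below illustrates the autoregressive objective the abstract describes: an RNN reads acoustic frames and is trained to predict the frame a few steps ahead, so its hidden states serve as the learned representations. The layer sizes, prediction shift, and loss here are illustrative placeholders; the authors' actual implementation is in the repository linked above.

```python
import torch
import torch.nn as nn

class APC(nn.Module):
    """Autoregressive model sketch: a multi-layer LSTM summarizes past
    frames; a linear head predicts a future frame from each hidden state."""

    def __init__(self, feat_dim=80, hidden_dim=512, num_layers=3):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)

    def forward(self, frames):               # frames: (batch, time, feat_dim)
        hidden, _ = self.rnn(frames)
        return self.head(hidden), hidden     # predictions and representations

def apc_loss(model, frames, shift=3):
    """L1 loss between the prediction at time t and the true frame at
    t+shift; no phonetic or word boundary labels are needed."""
    preds, _ = model(frames[:, :-shift])     # predict from past frames only
    return torch.abs(preds - frames[:, shift:]).mean()
```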
A Single Self-Supervised Model for Many Speech Modalities Enables Zero-Shot Modality Transfer
While audio-visual speech models can yield superior performance and
robustness compared to audio-only models, their development and adoption are
hindered by the lack of labeled and unlabeled audio-visual data and the cost to
deploy one model per modality. In this paper, we present u-HuBERT, a
self-supervised pre-training framework that can leverage both multimodal and
unimodal speech with a unified masked cluster prediction objective. By
utilizing modality dropout during pre-training, we demonstrate that a single
fine-tuned model can achieve performance on par with or better than
state-of-the-art modality-specific models. Moreover, our model fine-tuned only
on audio can perform well with audio-visual and visual speech input, achieving
zero-shot modality generalization for speech recognition and speaker
verification. In particular, our single model yields 1.2%/1.4%/27.2% speech
recognition word error rate on LRS3 with audio-visual/audio/visual input
- …
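A minimal sketch of the modality-dropout idea the abstract relies on: during pre-training, one modality's features are randomly zeroed before fusion, so the same shared encoder sees audio-visual, audio-only, and visual-only views of an utterance. The dropout probabilities and fusion-by-addition are assumptions for illustration, not u-HuBERT's exact configuration.

```python
import random
import torch

def fuse_with_modality_dropout(audio_feats, video_feats, p_drop=0.5, p_audio=0.5):
    """With probability p_drop, zero out one modality (audio with
    probability p_audio, otherwise video) before fusing the streams.
    The fused features would feed a shared encoder trained with the
    unified masked cluster prediction objective."""
    if random.random() < p_drop:
        if random.random() < p_audio:
            audio_feats = torch.zeros_like(audio_feats)
        else:
            video_feats = torch.zeros_like(video_feats)
    return audio_feats + video_feats  # illustrative fusion by addition
```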